We consider the classic online learning and stochastic multi-armed bandit (MAB) problems in a setting where, at each step, the online policy can probe and find out which of a small number ($k$) of choices has the better reward (or loss) before making its choice. In this model, we derive algorithms whose regret bounds have exponentially better dependence on the time horizon compared to the classic regret bounds. In particular, we show that probing with $k=2$ suffices to achieve time-independent regret bounds for online linear and convex optimization. The same number of probes improves the regret bound of stochastic MAB with independent arms from $O(\sqrt{nT})$ to $O(n^2 \log T)$, where $n$ is the number of arms and $T$ is the horizon length. For stochastic MAB, we also consider a stronger model where a probe reveals the reward values of the probed arms, and show that in this case, $k=3$ probes suffice to achieve parameter-independent constant regret, $O(n^2)$. Such regret bounds cannot be achieved even with full feedback after the play, showcasing the power of limited ``advice'' via probing before making the play. We also present extensions to the setting where the hints can be imperfect, and to the case of stochastic MAB where the rewards of the arms can be correlated.
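To make the probe model concrete, here is a toy simulation in the stronger variant where a probe reveals the realized rewards of the probed arms. The champion-vs-challenger probing scheme and all parameter values are illustrative assumptions, not the paper's algorithm; the point is that playing the better of two probed arms can beat even the best single arm's mean, which is why constant (or negative) regret becomes possible.

```python
import random

def probe_bandit(means, T, seed=0):
    """Toy stochastic MAB with k=2 probes per step (stronger model: the
    probe reveals the probed arms' realized rewards before the play).
    Champion = empirically best arm; challenger cycles round-robin."""
    rng = random.Random(seed)
    n = len(means)
    counts, sums, total = [0] * n, [0.0] * n, 0.0
    for t in range(T):
        # optimistic default 1.0 makes untried arms get probed early on
        champion = max(range(n),
                       key=lambda i: sums[i] / counts[i] if counts[i] else 1.0)
        probed = {champion, t % n}
        draws = {i: 1.0 if rng.random() < means[i] else 0.0 for i in probed}
        for i, d in draws.items():      # probe reveals both realized rewards
            counts[i] += 1
            sums[i] += d
        total += max(draws.values())    # play the better of the probed arms
    return total / T

avg = probe_bandit([0.2, 0.5, 0.8], T=5000)
```

With these arm means, the average reward ends up above the best arm's mean of 0.8, because each play takes the maximum of two realized draws.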
The research area of algorithms with predictions has seen recent success showing how to incorporate machine learning into algorithm design to improve performance when the predictions are correct, while retaining worst-case guarantees when they are not. Most previous work has assumed that the algorithm has access to a single predictor. However, in practice, there are many machine learning methods available, often with incomparable generalization guarantees, making it hard to pick a best method a priori. In this work we consider scenarios where multiple predictors are available to the algorithm and the question is how to best utilize them. Ideally, we would like the algorithm's performance to depend on the quality of the best predictor. However, utilizing more predictions comes with a cost, since we now have to identify which prediction is the best. We study the use of multiple predictors for a number of fundamental problems, including matching, load balancing, and non-clairvoyant scheduling, which have been well-studied in the single predictor setting. For each of these problems we introduce new algorithms that take advantage of multiple predictors, and prove bounds on the resulting performance.
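The cost of identifying the best of several predictors can be illustrated with the classic multiplicative-weights (Hedge) aggregator: follow a weighted combination of the predictors and exponentially down-weight the ones that incur loss. This is a generic sketch of "competing with the best predictor", not the paper's problem-specific algorithms; the learning rate and loss sequence below are illustrative.

```python
import math

def hedge_over_predictors(loss_rounds, eta=0.5):
    """Hedge over k predictors: each round, incur the weighted-average loss
    of the current weights, then multiply each predictor's weight by
    exp(-eta * its loss).  Total loss tracks the best single predictor."""
    k = len(loss_rounds[0])
    w = [1.0] * k
    alg_loss = 0.0
    for losses in loss_rounds:
        total = sum(w)
        alg_loss += sum(wi * li for wi, li in zip(w, losses)) / total
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, losses)]
    return alg_loss

# predictor 0 is always right, predictor 1 always wrong, over 100 rounds
rounds = [[0.0, 1.0]] * 100
loss = hedge_over_predictors(rounds)
```

Following predictor 1 (or a uniform average) would cost about 100 (or 50); Hedge pays only a small constant above the best predictor's total loss of 0.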
Recently, self-learning methods based on user-satisfaction metrics and contextual bandits have shown promising results in enabling consistent improvement of conversational AI systems. However, directly targeting such metrics through bandit learning objectives often increases the risk of abrupt policy changes that can degrade the current user experience. In this study, we introduce a scalable framework for supporting fine-grained exploration targets for individual domains via user-defined constraints. For example, we may want to ensure fewer policy deviations in business-critical domains such as shopping, while allocating more exploration budget to domains such as music. Furthermore, we present a novel meta-gradient learning approach that is scalable and practical for addressing this problem. The proposed method adapts constraint-violation penalty terms through a meta objective that encourages balanced constraint satisfaction across domains. We conduct extensive experiments using data from a real-world conversational AI system on a set of realistic constraint benchmarks. Based on the experimental results, we demonstrate that the proposed approach is capable of achieving the best balance between policy value and constraint satisfaction.
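The core idea of adapting per-domain penalty terms can be sketched with plain dual ascent: raise a domain's penalty weight while its constraint is violated, and stop raising it once the violation is driven to zero. This is only an illustration of the penalty-adaptation principle, not the paper's meta-gradient method; the `violation` response function, domain names, and targets below are all hypothetical.

```python
def adapt_penalties(target_violation, lr=0.1, steps=100):
    """Dual-ascent sketch: per-domain penalty weights lam[d] climb until the
    (stand-in) constraint violation for that domain reaches zero."""
    def violation(domain, lam):
        # hypothetical response: measured policy deviation shrinks linearly
        # as the penalty weight grows; fixed point at lam = target
        return target_violation[domain] - lam

    lam = {d: 0.0 for d in target_violation}
    for _ in range(steps):
        for d in lam:
            lam[d] = max(0.0, lam[d] + lr * violation(d, lam[d]))
    return lam

# toy targets: shopping tolerates little deviation, music tolerates more
lam = adapt_penalties({"shopping": 0.2, "music": 1.0})
```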
Generating representations that precisely reflect customer behavior is an important task for providing a personalized skill-routing experience in Alexa. Currently, the Dynamic Routing (DR) team, which is responsible for routing Alexa traffic to providers (skills), relies on two features as personalization signals: the absolute traffic count and the normalized traffic count of each skill usage per customer. Neither takes into account the graph-based structure of customer-skill interactions, which contains richer information about customer preferences. In this work, we first build a graph from customers' past interactions with invoked skills, in which user requests (utterances) are modeled as edges. We then propose a graph convolutional network (GCN)-based model, the Personalized Dynamic Routing Feature Encoder (PDRFE), which generates personalized customer representations learned from the constructed graph. Compared with existing models, PDRFE is able to further capture contextual information in the graph convolution function. The performance of the proposed model is evaluated on a downstream task, defect prediction, which predicts a defect label from the embeddings of a customer and their triggered skill. We observe improvements of up to 41% in the cross-entropy metric for our proposed model compared to the baselines.
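The GCN building block behind such an encoder can be sketched in a few lines: mean-aggregate each node's neighborhood (with a self-loop), then apply a linear map and ReLU. This is a minimal illustration of one graph-convolution layer, not the production PDRFE model; the tiny two-node graph and identity weight matrix are illustrative.

```python
def gcn_layer(adj, feats, weight):
    """One graph-convolution layer: for each node, average the features of
    its neighbors plus itself, then apply a linear map followed by ReLU."""
    n, dim = len(adj), len(feats[0])
    out_dim = len(weight[0])
    hidden = []
    for i in range(n):
        nbrs = [j for j in range(n) if adj[i][j]] + [i]   # self-loop
        agg = [sum(feats[j][d] for j in nbrs) / len(nbrs) for d in range(dim)]
        hidden.append([max(0.0, sum(agg[d] * weight[d][o] for d in range(dim)))
                       for o in range(out_dim)])
    return hidden

# two connected nodes with one-hot features; identity weights
out = gcn_layer([[0, 1], [1, 0]],
                [[1.0, 0.0], [0.0, 1.0]],
                [[1.0, 0.0], [0.0, 1.0]])
```

Each node ends up with the average of its own and its neighbor's features, i.e. `[0.5, 0.5]` for both nodes.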
We propose a deep-learning-based foreign language learning platform, named FreeTalky, for people who suffer from anxiety about speaking a foreign language, using a humanoid NAO robot and various deep learning models. A persona-based dialogue system embedded in NAO provides interesting and consistent multi-turn dialogues for users. In addition, a grammar error correction system promotes improvement of users' grammar skills. Our system thus supports personalized learning through persona-based dialogue, and grammar learning through grammar-error feedback. Furthermore, through a human evaluation in which conversations with the NAO robot replaced talking with a real human, we verified whether FreeTalky provides practical help in alleviating foreign-language anxiety (xenoglossophobia).
Compared with 2-D ultrasound (US) imaging with a single axial plane, 3-D US imaging systems can visualize a volume along three axial planes. This allows complete anatomical observation, which is useful for gynecological (GYN) and obstetric (OB) applications. Unfortunately, 3-D US has an inherent resolution limitation compared with 2-D US. In the case of 3-D US with a 3-D mechanical probe, for example, the image quality is comparable along the beam direction, but significant deterioration of image quality is often observed in the other two axial image planes. To address this, we propose a novel unsupervised deep learning approach to improve 3-D US image quality. In particular, using unmatched high-quality 2-D US images as a reference, we trained a recently proposed switchable CycleGAN architecture so that every mapped plane in 3-D can learn the image quality of 2-D US images. Thanks to the switchable architecture, our network can also provide real-time control of the image-enhancement level according to user preference, which is ideal for a user-centric scanner setup. Extensive experiments with clinical evaluation confirm that our method offers significantly improved image quality as well as user-friendly flexibility.
In this paper, we learn a diffusion model to generate 3D data on a scene scale. Specifically, our model crafts a 3D scene consisting of multiple objects, while recent diffusion research has focused on a single object. To realize our goal, we represent a scene with discrete class labels, i.e., a categorical distribution, to assign multiple objects to semantic categories. Thus, we extend discrete diffusion models to learn scene-scale categorical distributions. In addition, we validate that a latent diffusion model can reduce the computation cost of training and deployment. To the best of our knowledge, our work is the first to apply discrete and latent diffusion to 3D categorical data on a scene scale. We further propose to perform semantic scene completion (SSC) by learning a conditional distribution using our diffusion model, where the condition is a partial observation in a sparse point cloud. In experiments, we empirically show that our diffusion models not only generate reasonable scenes, but also perform the scene completion task better than a discriminative model. Our code and models are available at https://github.com/zoomin-lee/scene-scale-diffusion
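The forward (corruption) process of a discrete diffusion model over categorical labels can be sketched with a uniform transition kernel: at each step, every label is kept with probability 1 - beta and otherwise resampled uniformly over the classes. This is a generic sketch of a categorical forward process; the paper's actual transition schedule and scene representation may differ, and the toy per-voxel labels below are illustrative.

```python
import random

def corrupt_step(labels, num_classes, beta, rng):
    """One forward step of a uniform-transition discrete diffusion process:
    keep each label with probability 1 - beta, else resample uniformly."""
    return [l if rng.random() >= beta else rng.randrange(num_classes)
            for l in labels]

rng = random.Random(0)
scene = [0, 1, 2, 3, 2, 1]           # toy per-voxel semantic class labels
noisy = scene
for t in range(10):                   # repeated steps approach uniform noise
    noisy = corrupt_step(noisy, 4, 0.3, rng)
```

The reverse (generative) model is then trained to undo these corruption steps, conditioned here on a partial observation for scene completion.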
With the research directions described in this thesis, we seek to address the critical challenges in designing recommender systems that can understand the dynamics of continuous-time event sequences (CTES). We follow a ground-up approach, i.e., first, we address the problems that may arise due to the poor quality of CTES data being fed into a recommender system. Later, we handle the task of designing accurate recommender systems. To improve the quality of the CTES data, we address a fundamental problem of overcoming missing events in temporal sequences. Moreover, to provide accurate sequence modeling frameworks, we design solutions for point-of-interest (POI) recommendation, i.e., models that can handle the spatial mobility data of users across various POI check-ins and recommend candidate locations for the next check-in. Lastly, we highlight that the capabilities of the proposed models can have applications beyond recommender systems, and we extend their abilities to design solutions for large-scale CTES retrieval and human activity prediction. A significant part of this thesis uses the idea of modeling the underlying distribution of CTES via neural marked temporal point processes (MTPP). Traditional MTPP models are stochastic processes that utilize a fixed formulation to capture the generative mechanism of a sequence of discrete events localized in continuous time. In contrast, neural MTPPs combine the underlying ideas from the point process literature with modern deep learning architectures. The ability of deep-learning models to act as accurate function approximators has led to a significant gain in the predictive prowess of neural MTPP models. In this thesis, we utilize and present several neural network-based enhancements to current MTPP frameworks for the aforementioned real-world applications.
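The classical baseline that neural MTPPs generalize is the constant-intensity (homogeneous Poisson) temporal point process, whose log-likelihood has a closed form. The sketch below shows that form; a neural MTPP replaces the constant rate `lam` with a history-conditioned intensity `lam(t | H_t)` parameterized by a network. The event times below are illustrative.

```python
import math

def const_intensity_loglik(event_times, horizon, lam):
    """Log-likelihood of a temporal point process with constant intensity
    lam on [0, horizon]:  sum_i log(lam) - lam * horizon."""
    return len(event_times) * math.log(lam) - lam * horizon

events, horizon = [0.5, 1.7, 3.2], 4.0
mle = len(events) / horizon    # the maximum-likelihood rate is n / T = 0.75
```

The empirical rate n/T maximizes this likelihood, which the assertions below check against nearby rates.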
In this paper, we consider the inventory management (IM) problem where we need to make replenishment decisions for a large number of stock keeping units (SKUs) to balance their supply and demand. In our setting, the constraint on the shared resources (such as the inventory capacity) couples the otherwise independent control of each SKU. We formulate the problem with this structure as a Shared-Resource Stochastic Game (SRSG) and propose an efficient algorithm called Context-aware Decentralized PPO (CD-PPO). Through extensive experiments, we demonstrate that CD-PPO can accelerate the learning procedure compared with standard MARL algorithms.
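The coupled structure can be illustrated with a simple capacity-aware base-stock heuristic: each SKU orders up to its own base-stock level, but orders are scaled down proportionally when the shared inventory capacity would be exceeded. This is an illustrative baseline for the shared-resource coupling SRSG captures, not the CD-PPO policy; all numbers below are toy values.

```python
def capped_orders(inventory, base_stock, capacity):
    """Capacity-coupled base-stock heuristic: per-SKU order-up-to decisions,
    rescaled so total inventory never exceeds the shared capacity."""
    orders = [max(0, b - x) for x, b in zip(inventory, base_stock)]
    free = max(0, capacity - sum(inventory))   # shared resource remaining
    total = sum(orders)
    if total > free:
        scale = free / total if total else 0.0
        orders = [o * scale for o in orders]
    return orders

loose = capped_orders([2, 3], [5, 5], capacity=20)  # capacity not binding
tight = capped_orders([2, 3], [5, 5], capacity=6)   # only 1 unit of room
```

With ample capacity each SKU gets its full order; with a tight capacity the one free unit is split proportionally across SKUs, which is exactly the coupling that makes per-SKU control non-independent.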
We propose eXtensible Prompt (X-Prompt) for prompting a large language model (LLM) beyond natural language (NL). X-Prompt instructs an LLM with not only NL but also an extensible vocabulary of imaginary words, introduced to help represent what NL words can hardly describe, allowing a prompt to be more descriptive. Like NL prompts, X-Prompt is out-of-distribution (OOD) robust; to achieve this, we propose context-guided learning with prompt augmentation to learn its imaginary words for general usability, enabling them to be used in different prompt contexts for fine-grained specifications. The promising results of X-Prompt demonstrate its potential to enable advanced interaction between humans and LLMs, bridging their communication gap.
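Mechanically, an "imaginary word" resembles a soft-prompt token: a new trainable row appended to an otherwise frozen embedding table, addressable by name inside prompts. The sketch below illustrates only that vocabulary-extension step; the token name, dimensionality, and random initialization are illustrative assumptions, not X-Prompt's actual implementation.

```python
import random

def add_imaginary_word(table, token, dim, seed=0):
    """Append a new trainable 'imaginary word' vector to a frozen word-
    embedding table, soft-prompt style.  Existing NL rows stay untouched."""
    assert token not in table, "imaginary words must not collide with NL words"
    rng = random.Random(seed)
    table[token] = [rng.uniform(-0.1, 0.1) for _ in range(dim)]
    return table[token]

emb = {"cat": [1.0, 0.0], "sat": [0.0, 1.0]}   # toy frozen NL embeddings
vec = add_imaginary_word(emb, "<imag:texture-7>", dim=2)
```

During training, gradients would update only the imaginary rows, leaving the NL vocabulary (and hence OOD behavior on ordinary prompts) unchanged.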